Large language model hallucinations fall into three main categories: input-conflicting, context-conflicting, and fact-conflicting hallucinations. Compared with traditional models, evaluating hallucinations in large models faces new challenges, including the massive scale of training data, the models' strong generalization ability, and the subtlety of the errors themselves. Mitigation strategies can be applied at the pre-training, fine-tuning, and reinforcement learning stages, but reliable evaluation remains an open problem. Overall, both the evaluation and the mitigation of large model hallucinations require further research in order to support the practical deployment of large models.
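To make the fact-conflicting category concrete, below is a minimal, purely illustrative sketch of one naive evaluation heuristic: flag an answer as potentially hallucinated when its token overlap with a trusted reference falls below a threshold. The function names and the threshold are assumptions for illustration; real hallucination evaluations rely on stronger tools such as NLI models or retrieval-augmented fact checkers.

```python
import string

# Hypothetical helper names and threshold chosen for illustration only.

def _tokens(text: str) -> list:
    """Lowercase, whitespace-split, and strip surrounding punctuation."""
    return [t.strip(string.punctuation) for t in text.lower().split()]

def token_overlap(answer: str, reference: str) -> float:
    """Fraction of answer tokens that also appear in the reference."""
    ans = _tokens(answer)
    ref = set(_tokens(reference))
    if not ans:
        return 0.0
    return sum(t in ref for t in ans) / len(ans)

def flag_hallucination(answer: str, reference: str, threshold: float = 0.7) -> bool:
    """True if the answer shares too few tokens with the reference facts."""
    return token_overlap(answer, reference) < threshold

reference = "The Eiffel Tower is located in Paris, France."
print(flag_hallucination("The Eiffel Tower is in Paris.", reference))          # consistent
print(flag_hallucination("The Eiffel Tower stands in Berlin, Germany.", reference))  # fact conflict
```

Such surface-overlap heuristics illustrate why evaluation is hard: a fluent answer can contradict the reference on a single entity while still sharing most of its tokens, which is exactly the subtlety problem noted above.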